278 research outputs found

    Grasp Analysis Tools for Synergistic Underactuated Robotic Hands

    Despite being a classical topic in robotics, research on dexterous robotic hands still stirs lively activity. Current interest is especially drawn to underactuated robotic hands, in which a high number of degrees of freedom (DoFs) co-exists with a relatively low number of degrees of actuation. Correlating the DoFs through a judicious distribution of actuators aims to simplify control with minimal loss of dexterity. In this sense, the application of bio-inspired principles is bringing research toward a more conscious design. This work proposes new, general approaches for the analysis of grasps with synergistic underactuated robotic hands. After a review of the quasi-static equations describing the system, in which contact preload is also considered, two different approaches to the analysis are presented. The first is based on a systematic combination of the equations. The independent and dependent variables are defined, and cause-effect relationships between them are found. In addition, remarkable properties of the grasp, such as the subspace of controllable internal forces and the grasp compliance, are worked out in symbolic form. Then, some relevant kinds of tasks, such as pure squeeze, spurious squeeze and kinematic grasp displacements, are defined in terms of the nullity or non-nullity of proper variables. The second method of analysis shows how to determine the feasibility of the pre-defined tasks by systematically decomposing the solution space of the system. As a result, the inputs to be given to the hand in order to achieve the desired system displacements are found. The study of the feasible variations identifies all the combinations of null and/or non-null variables allowed by the equations describing the system.
Numerical results are presented both for precision and power grasps, finding forces and displacements that the hand can impose on the object, and showing which properties are preserved after the introduction of a synergistic underactuation mechanism
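    The subspace of controllable internal forces mentioned above can be illustrated numerically. The sketch below is a minimal, hypothetical example (a planar object held at two hard-finger contacts; all matrix values are invented for illustration, not taken from the paper): internal forces are the contact forces lying in the nullspace of the grasp matrix, i.e. those that squeeze the object without producing any net wrench.

```python
import numpy as np

# Hypothetical planar example: two contact points at x = -1 and x = +1,
# each transmitting a force (fx, fy), so 4 contact force components.
# The grasp matrix G maps contact forces to the object wrench (Fx, Fy, tau).
G = np.array([
    [1.0,  0.0, 1.0, 0.0],   # net Fx
    [0.0,  1.0, 0.0, 1.0],   # net Fy
    [0.0, -1.0, 0.0, 1.0],   # net torque: tau = x * fy at each contact
])

# Internal forces produce zero net wrench, i.e. they span null(G).
# These are the "squeeze" directions of the grasp.
_, s, Vt = np.linalg.svd(G)
rank = int(np.sum(s > 1e-10))
internal_basis = Vt[rank:].T          # columns span the nullspace of G

print(internal_basis.shape)           # one internal-force direction here
print(np.allclose(G @ internal_basis, 0.0))
```

For a synergistic hand, this construction would then be intersected with the forces reachable through the synergy underactuation, which determines how much of the internal-force subspace remains controllable.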

    Multisensory causal inference in the brain

    At any given moment, our brain processes multiple inputs from its different sensory modalities (vision, hearing, touch, etc.). In deciphering this array of sensory information, the brain has to solve two problems: (1) which of the inputs originate from the same object and should be integrated and (2) for the sensations originating from the same object, how best to integrate them. Recent behavioural studies suggest that the human brain solves these problems using optimal probabilistic inference, known as Bayesian causal inference. However, how and where the underlying computations are carried out in the brain have remained unknown. By combining neuroimaging-based decoding techniques and computational modelling of behavioural data, a new study now sheds light on how multisensory causal inference maps onto specific brain areas. The results suggest that the complexity of neural computations increases along the visual hierarchy and link specific components of the causal inference process with specific visual and parietal regions
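    The Bayesian causal inference computation referred to here can be sketched in a few lines. The implementation below follows the standard formulation with a zero-mean Gaussian spatial prior and model averaging; all parameter values are hypothetical, chosen only for illustration.

```python
import numpy as np

def causal_inference(x_v, x_a, sigma_v, sigma_a, sigma_p, p_common):
    """Bayesian causal inference for one visual/auditory sample pair
    (model-averaging variant; spatial prior on source location ~ N(0, sigma_p))."""
    # Likelihood of both samples under a single common source (C = 1)
    var1 = (sigma_v**2 * sigma_a**2 + sigma_v**2 * sigma_p**2
            + sigma_a**2 * sigma_p**2)
    like1 = np.exp(-((x_v - x_a)**2 * sigma_p**2
                     + x_v**2 * sigma_a**2 + x_a**2 * sigma_v**2)
                   / (2.0 * var1)) / (2.0 * np.pi * np.sqrt(var1))
    # Likelihood under two independent sources (C = 2)
    var_v = sigma_v**2 + sigma_p**2
    var_a = sigma_a**2 + sigma_p**2
    like2 = (np.exp(-x_v**2 / (2.0 * var_v)) / np.sqrt(2.0 * np.pi * var_v)
             * np.exp(-x_a**2 / (2.0 * var_a)) / np.sqrt(2.0 * np.pi * var_a))
    # Posterior probability that the two inputs share one cause
    post_common = like1 * p_common / (like1 * p_common + like2 * (1.0 - p_common))
    # Precision-weighted location estimates under each causal structure
    s_fused = ((x_v / sigma_v**2 + x_a / sigma_a**2)
               / (1.0 / sigma_v**2 + 1.0 / sigma_a**2 + 1.0 / sigma_p**2))
    s_vis = (x_v / sigma_v**2) / (1.0 / sigma_v**2 + 1.0 / sigma_p**2)
    # Model averaging: blend the estimates by the posterior over causes
    return post_common, post_common * s_fused + (1.0 - post_common) * s_vis

# Nearby samples favor a common cause; distant samples favor separate causes.
p_near, _ = causal_inference(0.0, 0.5, 2.0, 4.0, 10.0, 0.5)
p_far, _ = causal_inference(0.0, 20.0, 2.0, 4.0, 10.0, 0.5)
print(p_near, p_far)
```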

    Vestibular Facilitation of Optic Flow Parsing

    Simultaneous object motion and self-motion give rise to complex patterns of retinal image motion. In order to estimate object motion accurately, the brain must parse this complex retinal motion into self-motion and object motion components. Although this computational problem can be solved, in principle, through purely visual mechanisms, extra-retinal information that arises from the vestibular system during self-motion may also play an important role. Here we investigate whether combining vestibular and visual self-motion information improves the precision of object motion estimates. Subjects were asked to discriminate the direction of object motion in the presence of simultaneous self-motion, depicted either by visual cues alone (i.e. optic flow) or by combined visual/vestibular stimuli. We report a small but significant improvement in object motion discrimination thresholds with the addition of vestibular cues. This improvement was greatest for eccentric heading directions and negligible for forward movement, a finding that could reflect increased relative reliability of vestibular versus visual cues for eccentric heading directions. Overall, these results are consistent with the hypothesis that vestibular inputs can help parse retinal image motion into self-motion and object motion components
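    The reliability argument in this abstract follows directly from maximum-likelihood cue combination: the combined estimate has lower variance than either cue alone, and the predicted improvement is largest when the two cues are comparably reliable. A minimal sketch, with hypothetical threshold values:

```python
import numpy as np

def combined_threshold(sigma_vis, sigma_vest):
    """Predicted discrimination threshold for optimal (maximum-likelihood)
    combination of independent visual and vestibular estimates."""
    return np.sqrt((sigma_vis**2 * sigma_vest**2)
                   / (sigma_vis**2 + sigma_vest**2))

# Hypothetical thresholds (deg): when vestibular reliability is comparable
# to visual (e.g. eccentric headings), the predicted improvement is large;
# when the vestibular cue is much noisier (e.g. forward motion), it is small.
print(combined_threshold(3.0, 3.0))    # well below the visual-only 3.0
print(combined_threshold(3.0, 12.0))   # barely below 3.0
```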

    Investigating human audio-visual object perception with a combination of hypothesis-generating and hypothesis-testing fMRI analysis tools

    Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions of the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps served as definition of a distributed network of multisensory candidate regions including superior temporal, ventral occipito-temporal, posterior parietal and prefrontal regions. During an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine out of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated functional networks of uni- and multisensory functional connectivity from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research and using independent datasets to test hypotheses generated from a data-driven analysis
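    The max-criterion used in the second experiment is simple to state in code: the audio-visual response must exceed the stronger of the two unisensory responses. The region names and GLM beta estimates below are invented purely for illustration:

```python
def meets_max_criterion(beta_a, beta_v, beta_av):
    """Max-criterion for multisensory integration: A < AV > V,
    i.e. the AV response exceeds both unisensory responses."""
    return beta_av > max(beta_a, beta_v)

# Hypothetical beta estimates (A, V, AV) for three candidate regions
regions = {
    "STS": (0.8, 0.6, 1.1),   # meets the criterion
    "vOT": (0.2, 1.0, 1.3),   # meets the criterion
    "PFC": (0.5, 0.9, 0.7),   # AV below the visual response: fails
}
for name, (a, v, av) in regions.items():
    print(name, meets_max_criterion(a, v, av))
```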

    A retinal code for motion along the gravitational and body axes

    Self-motion triggers complementary visual and vestibular reflexes supporting image-stabilization and balance. Translation through space produces one global pattern of retinal image motion (optic flow), rotation another. We examined the direction preferences of direction-sensitive ganglion cells (DSGCs) in flattened mouse retinas in vitro. Here we show that for each subtype of DSGC, direction preference varies topographically so as to align with specific translatory optic flow fields, creating a neural ensemble tuned for a specific direction of motion through space. Four cardinal translatory directions are represented, aligned with two axes of high adaptive relevance: the body and gravitational axes. One subtype maximizes its output when the mouse advances, others when it retreats, rises or falls. Two classes of DSGCs, namely, ON-DSGCs and ON-OFF-DSGCs, share the same spatial geometry but weight the four channels differently. Each subtype ensemble is also tuned for rotation. The relative activation of DSGC channels uniquely encodes every translation and rotation. Although retinal and vestibular systems both encode translatory and rotatory self-motion, their coordinate systems differ
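    The idea that the relative activation of four cardinal channels uniquely encodes any direction of motion can be illustrated with a population-vector sketch. Everything below (rectified cosine tuning, the readout rule) is a simplifying assumption for illustration, not the retina's actual computation:

```python
import numpy as np

# Hypothetical sketch: four DSGC channels tuned to cardinal translatory
# directions, each with rectified cosine tuning.  The pattern of activity
# across channels determines the direction of motion.
preferred = np.deg2rad([0.0, 180.0, 90.0, 270.0])  # advance/retreat/rise/fall

def channel_responses(direction_deg):
    d = np.deg2rad(direction_deg)
    return np.maximum(np.cos(d - preferred), 0.0)   # rectified cosine tuning

def decode(responses):
    # Population-vector readout over the four cardinal channels
    vec = responses @ np.column_stack([np.cos(preferred), np.sin(preferred)])
    return np.rad2deg(np.arctan2(vec[1], vec[0])) % 360.0

print(decode(channel_responses(30.0)))   # recovers the stimulus direction
```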

    An International Laboratory for Systems and Computational Neuroscience

    The neural basis of decision-making has been elusive and involves the coordinated activity of multiple brain structures. This NeuroView, by the International Brain Laboratory (IBL), discusses their efforts to develop a standardized mouse decision-making behavior, to make coordinated measurements of neural activity across the mouse brain, and to use theory and analyses to uncover the neural computations that support decision-making

    Peaks and Troughs of Three-Dimensional Vestibulo-ocular Reflex in Humans

    The three-dimensional vestibulo-ocular reflex (3D VOR) ideally generates compensatory ocular rotations not only with a magnitude equal and opposite to the head rotation but also about an axis that is collinear with the head rotation axis. Vestibulo-ocular responses only partially fulfill this ideal behavior. Because animal studies have shown that vestibular stimulation about particular axes may lead to suboptimal compensatory responses, we investigated the peaks and troughs of 3D VOR stabilization in healthy subjects, in terms of gain and alignment of the 3D vestibulo-ocular response. Six healthy upright-sitting subjects underwent whole-body small-amplitude sinusoidal and constant-acceleration transients delivered by a six-degree-of-freedom motion platform. Subjects were oscillated about the vertical axis and about axes in the horizontal plane varying between roll and pitch at increments of 22.5° in azimuth. Transients were delivered in yaw, roll, and pitch and in the vertical canal planes. Eye movements were recorded with 3D search coils. Eye coil signals were converted to rotation vectors, from which we calculated gain and misalignment. During horizontal-axis stimulation, systematic deviations were found. In the light, misalignment of the 3D VOR peaked at an azimuth of about 45°. These deviations can be explained by vector summation of the eye rotation components, with a low gain for torsion and a high gain for vertical. In the dark and in response to transients, the gain of all components was lower. Misalignment in darkness and for transients had different peaks and troughs than in the light: it was minimal during pitch-axis stimulation and maximal during roll-axis stimulation. We show that the relatively large misalignment for roll in darkness is due to a horizontal eye movement component that is only present in darkness. 
In combination with the relatively low torsion gain, this horizontal component has a relatively large effect on the alignment of the eye rotation axis with respect to the head rotation axis
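    The vector-summation explanation can be made concrete: if each eye rotation component follows the corresponding head component with its own gain, the eye rotation axis tilts away from the head rotation axis, vanishing at the pure roll and pitch axes and peaking in between. The gains below are hypothetical values for illustration:

```python
import numpy as np

def misalignment_deg(azimuth_deg, gain_torsion, gain_vertical,
                     gain_horizontal=0.0):
    """Angle between eye and head rotation axes when each eye rotation
    component follows the head component with its own gain (hypothetical
    vector-summation sketch; x = roll/torsion, y = pitch/vertical)."""
    az = np.deg2rad(azimuth_deg)
    head = np.array([np.cos(az), np.sin(az), 0.0])   # axis in horizontal plane
    eye = np.array([gain_torsion * head[0],          # unequal component gains
                    gain_vertical * head[1],
                    gain_horizontal])
    cosang = eye @ head / (np.linalg.norm(eye) * np.linalg.norm(head))
    return np.rad2deg(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Low torsional gain, high vertical gain: misalignment peaks between
# roll (0 deg) and pitch (90 deg) and vanishes at the pure axes.
for az in (0.0, 45.0, 90.0):
    print(az, round(misalignment_deg(az, 0.5, 0.9), 1))
```

Adding a nonzero `gain_horizontal` term reproduces the darkness case described above, where a horizontal component further tilts the eye rotation axis during roll stimulation.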

    Non-linear stimulus-response behavior of the human stance control system is predicted by optimization of a system with sensory and motor noise

    We developed a theory of human stance control that predicted (1) how subjects re-weight their utilization of proprioceptive and graviceptive orientation information in experiments where eyes-closed stance was perturbed by surface-tilt stimuli with different amplitudes, (2) the experimentally observed increase in body sway variability (i.e. the “remnant” body sway that could not be attributed to the stimulus) with increasing surface-tilt amplitude, (3) neural controller feedback gains that determine the amount of corrective torque generated in relation to sensory cues signaling body orientation, and (4) the magnitude and structure of spontaneous body sway. Responses to surface-tilt perturbations with different amplitudes were interpreted using a feedback control model to determine control parameters and changes in these parameters with stimulus amplitude. Different combinations of internal sensory and/or motor noise sources were added to the model to identify the properties of noise sources that were able to account for the experimental remnant sway characteristics. Various behavioral criteria were investigated to determine if optimization of these criteria could predict the identified model parameters and amplitude-dependent parameter changes. Robust findings were that remnant sway characteristics were best predicted by models that included both sensory and motor noise, the graviceptive noise magnitude was about ten times larger than the proprioceptive noise, and noise sources with signal-dependent properties provided better explanations of remnant sway. Overall results indicate that humans dynamically weight sensory system contributions to stance control and tune their corrective responses to minimize the energetic effects of sensory noise and external stimuli
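    The feedback-control interpretation can be sketched as a simulation: an inverted-pendulum body stabilized by corrective torque computed from a weighted sum of noisy graviceptive and proprioceptive cues, with the graviceptive noise set roughly ten times larger, as in the identified models. All numerical parameters below are invented placeholders, not the fitted values from the study:

```python
import numpy as np

# Minimal sensory re-weighting feedback sketch (hypothetical parameters).
rng = np.random.default_rng(0)
dt, T = 0.01, 60.0
J, mgh = 80.0, 700.0            # body inertia (kg m^2), gravity stiffness (N m/rad)
Kp, Kd = 1200.0, 350.0          # neural controller gains
w_grav = 0.7                    # graviceptive weight (proprioceptive = 1 - w_grav)
sig_grav, sig_prop = 0.005, 0.0005   # graviceptive noise ~10x proprioceptive (rad)

theta = omega = 0.0             # body-in-space angle (rad) and angular velocity
surface = 0.0                   # surface tilt (flat here; the stimulus in the study)
sway = []
for _ in range(int(T / dt)):
    grav_cue = theta + sig_grav * rng.standard_normal()          # body-in-space
    prop_cue = (theta - surface) + sig_prop * rng.standard_normal()  # body-to-surface
    err = w_grav * grav_cue + (1.0 - w_grav) * prop_cue
    torque = -Kp * err - Kd * omega               # corrective ankle torque
    alpha = (mgh * theta + torque) / J            # inverted-pendulum dynamics
    omega += alpha * dt
    theta += omega * dt
    sway.append(theta)

# With no stimulus, all sway here is "remnant", driven purely by sensory noise
print(np.std(np.rad2deg(sway)))
```

Re-weighting is modeled by changing `w_grav` with stimulus amplitude; larger surface tilts would push weight away from the proprioceptive cue, as in finding (1) above.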